17 research outputs found

    The role of linguistic contrasts in the auditory feedback control of Speech

    Thesis (Ph.D. in Speech and Hearing Bioscience and Technology), Harvard-MIT Division of Health Sciences and Technology, 2010. Speakers use auditory feedback to monitor their own speech, ensuring that the intended output matches the observed output. By altering the acoustic feedback signal before it reaches the speaker's ear, we can induce auditory errors: differences between what is expected and what is heard. This dissertation investigates the neural mechanisms responsible for the detection and consequent correction of these auditory errors. Linguistic influences on feedback control were assessed in two experiments employing auditory perturbation. In a behavioral experiment, subjects spoke four-word sentences while the fundamental frequency (F0) of the stressed word was perturbed either upwards or downwards, causing the word to sound more or less stressed. Subjects adapted by altering both the F0 and the intensity contrast between stressed and unstressed words, even though intensity remained unperturbed. An integrated model of prosodic control is proposed in which F0 and intensity are modulated together to achieve a stress target. In a second experiment, functional magnetic resonance imaging was used to measure neural responses to speech with and without auditory perturbation. Subjects were found to compensate more for formant shifts that resulted in a phonetic category change than for formant shifts that did not, despite the identical magnitudes of the shifts. Furthermore, the extent of neural activation in superior temporal and inferior frontal regions was greater for cross-category than for within-category shifts, evidence that a stronger cortical error signal accompanies a linguistically relevant acoustic change. Taken together, these results demonstrate that auditory feedback control is sensitive to linguistic contrasts learned through auditory experience. By Caroline A. Niziolek.

    Vowel Category Boundaries Enhance Cortical and Behavioral Responses to Speech Feedback Alterations

    Auditory feedback is instrumental in the online control of speech, allowing speakers to compare their self-produced speech signal with a desired auditory target and correct for errors. However, there is little account of the representation of “target” and “error”: does error depend purely on acoustic distance from a target, or is error enhanced by phoneme category changes? Here, we show an effect of vowel boundaries on compensatory responses to a real-time auditory perturbation. While human subjects spoke monosyllabic words, event-triggered functional magnetic resonance imaging was used to characterize neural responses to unexpected changes in auditory feedback. Capitalizing on speakers' natural variability, we contrasted the responses to feedback perturbations applied to two classes of utterances: (1) those that fell nearer to the category boundary, for which perturbations were designed to change the phonemic identity of the heard speech; and (2) those that fell farther from the boundary, for which perturbations resulted in only sub-phonemic auditory differences. Subjects' behavioral compensation was more than three times greater when feedback shifts were applied nearer to a category boundary. Furthermore, a near-boundary shift resulted in stronger cortical responses, most notably in right posterior superior temporal gyrus, than an identical shift that occurred far from the boundary. Across participants, a correlation was found between the amount of compensation to the perturbation and the amount of activity in a network of superior temporal and inferior frontal brain regions. Together, these results demonstrate that auditory feedback control of speech is sensitive to linguistic categories learned through auditory experience. National Institutes of Health (U.S.) (Grant R01 DC002852).
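
    The grouping logic described in the abstract above can be illustrated with a toy sketch. Everything here is hypothetical (the boundary location, formant values, and threshold are invented for illustration, and the real study used a two-dimensional boundary estimated per speaker): utterances are split by their distance in formant space to a vowel category boundary, so that an identical shift is "cross-category" for one group and "within-category" for the other.

```python
# Hypothetical sketch: split utterances of one vowel into near-boundary
# and far-from-boundary groups, as in the contrast described above.
# The boundary is simplified to a single F1 value; all numbers are toy data.

def distance_to_boundary(f1, boundary_f1):
    """Distance (Hz) from an utterance's F1 to a simplified category boundary."""
    return abs(f1 - boundary_f1)

# Toy (F1, F2) values in Hz for several natural productions of one vowel.
utterances = [(580, 1720), (640, 1700), (700, 1690), (760, 1680)]
boundary_f1 = 730        # assumed location of the neighboring-vowel boundary
near_threshold = 60      # assumed cutoff (Hz) for "near the boundary"

near, far = [], []
for f1, f2 in utterances:
    group = near if distance_to_boundary(f1, boundary_f1) < near_threshold else far
    group.append((f1, f2))

# An identical formant shift applied to `near` utterances would cross the
# boundary (changing perceived identity), while for `far` it would not.
```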

    What does motor efference copy represent? Evidence from speech production.

    How precisely does the brain predict the sensory consequences of our actions? Efference copy is thought to reflect the predicted sensation of self-produced motor acts, such as the auditory feedback heard while speaking. Here, we use magnetoencephalographic imaging (MEG-I) in human speakers to demonstrate that efference copy prediction does not track movement variability across repetitions of the same motor task. Specifically, spoken vowels were less accurately predicted when they were less similar to a speaker's median production, even though the prediction is thought to be based on the very motor commands that generate each vowel. Auditory cortical responses to less prototypical speech productions were less suppressed, resembling responses to speech errors, and were correlated with later corrective movement, suggesting that the suppression may be functionally significant for error correction. The failure of the motor system to accurately predict less prototypical speech productions suggests that the efferent-driven suppression does not reflect a sensory prediction, but a sensory goal.

    Auditory cortex processes variation in our own speech.

    As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered "ah" and while they listened to those speech sounds played back. Subjects' utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance's difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production.
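
    The two sorting criteria contrasted in this abstract, an utterance's formant change from the previous utterance versus its distance from the speaker's median, can be sketched as follows. This is not the authors' analysis code; the F1 values and the use of a single formant are invented for illustration.

```python
import statistics

# Minimal sketch of the two sorting criteria described above:
# trial-to-trial formant change vs. deviation from the median production.
f1_track = [702, 695, 710, 740, 698, 705]  # toy F1 values (Hz), one per "ah"

median_f1 = statistics.median(f1_track)

# Criterion 1: absolute F1 change from the immediately preceding utterance.
trial_to_trial = [abs(b - a) for a, b in zip(f1_track, f1_track[1:])]

# Criterion 2: absolute F1 distance from the speaker's median production.
from_median = [abs(f - median_f1) for f in f1_track]

# Rank trials (from the second onward) by trial-to-trial change, so the
# extremes can form "least variable" and "most variable" utterance groups.
by_change = sorted(range(1, len(f1_track)), key=lambda i: trial_to_trial[i - 1])
```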

    Online adaptation to altered auditory feedback is predicted by auditory acuity and not by domain-general executive control resources

    When a speaker's auditory feedback is altered, they adapt to the perturbation by altering their own production, which demonstrates the role of auditory feedback in speech motor control. In the present study, we explored the role of auditory acuity and executive control in this process. Based on the DIVA model and the major cognitive control models, we expected that higher auditory acuity and better executive control skills would predict larger adaptation to the alteration. Thirty-six native Spanish speakers performed an altered auditory feedback experiment, executive control tasks (numerical Stroop, Simon, and Flanker), and auditory acuity tasks (loudness, pitch, and melody pattern discrimination). In the altered feedback experiment, participants produced the pseudoword “pep” (/pep/) while perceiving their auditory feedback in real time through earphones. The auditory feedback was first unaltered and then progressively altered in the F1 and F2 dimensions until maximal alteration (F1 −150 Hz; F2 +300 Hz). The normalized distance of maximal adaptation ranged from 4 to 137 Hz (median of 75 ± 36). The measures of auditory acuity were significant predictors of adaptation, while individual measures of cognitive function (obtained from the executive control tasks) were not. Better auditory discriminators adapted more to the alteration. We conclude that adaptation to altered auditory feedback is well predicted by general auditory acuity, as suggested by the DIVA model. In line with the framework of motor-control models, no specific claim about the involvement of executive resources in speech motor control can be made.
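
    One common way to quantify adaptation to a two-dimensional formant shift like the one above (F1 −150 Hz, F2 +300 Hz) is to project the change in produced formants onto the direction opposing the perturbation. The sketch below illustrates that idea with invented baseline and hold-phase values; the paper's exact normalization formula is not reproduced here, so treat this only as a plausible reading of "normalized distance of maximal adaptation".

```python
import math

# Hedged illustration: adaptation measured as the component of the
# production change that opposes the applied feedback shift.
perturb = (-150.0, 300.0)    # applied shift: F1 -150 Hz, F2 +300 Hz
baseline = (500.0, 1800.0)   # toy mean (F1, F2) before alteration
max_hold = (560.0, 1690.0)   # toy mean (F1, F2) at maximal alteration

change = (max_hold[0] - baseline[0], max_hold[1] - baseline[1])
norm = math.hypot(*perturb)
# Unit vector pointing opposite to the perturbation.
opposing = (-perturb[0] / norm, -perturb[1] / norm)

# Positive values mean the speaker shifted production against the feedback
# alteration; the result is a distance in Hz along the opposing direction.
adaptation_hz = change[0] * opposing[0] + change[1] * opposing[1]
```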
